You can find the slides (and additional materials) here: https://github.com/jstbcs/ws-bayesian-stats-r.
library(diagram)
par(mar = c(0, 0, 0, 0))
# box colors (myCol was defined earlier in the slides; any four colors work)
myCol <- hcl.colors(4)
names <- c("LL", "LH", "HL", "HH"); o <- 1:4
M <- matrix(nrow = length(o), ncol = length(o), data = 0)
M[1, 2] <- ""; M[1, 3] <- ""; M[2, 4] <- ""; M[3, 4] <- ""
plotmat(M, pos = c(1, 2, 1), name = names[c(4, 2, 3, 1)]
        , curve = 0, box.type = "round", box.size = .06, box.prop = .8
        , box.col = myCol[c(4, 2, 3, 1)], arr.length = 0, box.cex = 1.2
        , relsize = 1, shadow.size = 0.007)
Option 1
# Monte Carlo estimate of the prior probability that the predicted order holds
M <- 100000
LL <- rcauchy(M)
LH <- rcauchy(M)
HL <- rcauchy(M)
HH <- rcauchy(M)
mean(LL < LH & LL < HL &
       LH < HH & HL < HH)
## [1] 0.0834
Option 2
\[ \begin{align} HH &= \mu + \theta_1 + \theta_2 + \theta_3\\ LH &= \mu - \theta_1 + \theta_2 - \theta_3\\ HL &= \mu + \theta_1 - \theta_2 - \theta_3\\ LL &= \mu - \theta_1 - \theta_2 + \theta_3\\ \end{align} \]
\[ \begin{align} HH - LH > 0 \Leftrightarrow \theta_1 + \theta_3 > 0\\ HH - HL > 0 \Leftrightarrow \theta_2 + \theta_3 > 0\\ LH - LL > 0 \Leftrightarrow \theta_2 - \theta_3 > 0\\ HL - LL > 0 \Leftrightarrow \theta_1 - \theta_3 > 0\\ \end{align} \]
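The equivalences above can be verified numerically; the following sanity check plugs one random draw of \(\mu, \theta_1, \theta_2, \theta_3\) into the cell-mean equations and confirms each difference:

```r
# numeric check of the sign equivalences derived above
set.seed(123)
mu <- rcauchy(1); theta1 <- rcauchy(1); theta2 <- rcauchy(1); theta3 <- rcauchy(1)
HH <- mu + theta1 + theta2 + theta3
LH <- mu - theta1 + theta2 - theta3
HL <- mu + theta1 - theta2 - theta3
LL <- mu - theta1 - theta2 + theta3
c(isTRUE(all.equal(HH - LH, 2 * (theta1 + theta3))),
  isTRUE(all.equal(HH - HL, 2 * (theta2 + theta3))),
  isTRUE(all.equal(LH - LL, 2 * (theta2 - theta3))),
  isTRUE(all.equal(HL - LL, 2 * (theta1 - theta3))))
```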
# same check under the theta parameterization
M <- 100000
theta1 <- rcauchy(M)
theta2 <- rcauchy(M)
theta3 <- rcauchy(M)
mean(theta1 + theta3 > 0 &
       theta2 + theta3 > 0 &
       theta2 - theta3 > 0 &
       theta1 - theta3 > 0)
## [1] 0.08201
Check out worksheet 1 on github!
c(est.disgust[1, "disgust-0"], est.disgust[1, "disgust-1"])
##  disgust-0  disgust-1 
## -0.3538381  0.3538381
Gelman (2005):
Efron & Morris (1977)
When a sample exhausts the population, the corresponding variable is fixed; when the sample is a small part of the population the corresponding variable is random (Green & Tukey, 1960).
In BayesFactor, whichRandom is the important argument.

mod.gen <- BayesFactor::lmBF(value ~ disgust + fear + disgust:fear + Subject
                             , data = datl
                             , whichRandom = "Subject"
                             , rscaleEffects = c("disgust" = 1/2
                                                 , "fear" = 1/2
                                                 , "disgust:fear" = 1/3
                                                 , "Subject" = 1/4))
When BayesFactor was developed, the developers were pretty confident about random intercepts, but a bit hesitant about random slopes.

## Bayes factor analysis
## --------------
## [1] cond                  : 28348.34     ±0%
## [2] sub                   : 1.416292e+55 ±0.01%
## [3] cond + sub            : 7.764012e+59 ±1.46%
## [4] cond + sub + cond:sub : 7.149918e+75 ±1.01%
## 
## Against denominator:
##   Intercept only 
## ---
## Bayes factor type: BFlinearModel, JZS
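To compare two of these models directly, divide the corresponding elements of the BayesFactor object. A minimal sketch with simulated data (the data frame, effect size, and object name `mods` are made up for illustration):

```r
library(BayesFactor)

# toy data: 10 subjects in 2 within-subject conditions (illustrative only)
set.seed(3)
dat <- data.frame(sub  = factor(rep(1:10, each = 20)),
                  cond = factor(rep(c("a", "b"), 100)))
dat$y <- rnorm(200) + ifelse(dat$cond == "b", 0.5, 0)

# enumerate the same four models as in the output above
mods <- generalTestBF(y ~ cond * sub, data = dat, whichRandom = "sub")

# evidence for random slopes: full model vs. random-intercept model
mods[4] / mods[3]
```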
Let’s start with the model setup again.
\[Y_{ijk} \sim \mbox{Normal}(\mu + \alpha_i + x_j \theta, \sigma^2).\]
Random intercept \(\alpha_i\) has a distribution (not the prior):
\[\alpha_i \sim \mbox{Normal}(0, g_\alpha \sigma^2).\]
Prior on \(g_\alpha\), the variance scaling factor:
\[g_\alpha \sim \mbox{Inverse-}\chi^2(r_\alpha).\]
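A quick sketch of what this prior implies for \(\alpha_i\), assuming \(\sigma = 1\) and scale \(r_\alpha = 1\): mixing a normal over a scaled inverse-\(\chi^2_1\) variance yields Cauchy-distributed effects.

```r
# what the g prior implies for alpha_i (sketch; assumes sigma = 1, r_alpha = 1)
set.seed(42)
M <- 100000
r_alpha <- 1
g     <- r_alpha^2 / rchisq(M, df = 1)  # scaled inverse-chi^2(1) draws
alpha <- rnorm(M, 0, sqrt(g))           # marginally, alpha_i is Cauchy(0, r_alpha)
quantile(alpha, c(.25, .5, .75))        # quartiles close to -1, 0, 1
```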
Three observations:

- If you run posterior() and look at the estimates, the last columns are always “g”. Now you know why. :)
- Effects in BayesFactor are standardized by \(\sigma\): \(\frac{\theta}{\sigma}\).

Check out worksheet 2 on github!
Von Bastian, Souza, & Gade (2015)
Haaf (2018); Haaf & Rouder (2017)
\[\theta_i = 0\]
\[\theta_i = \nu\] \[\nu \sim \mbox{Truncated-Normal}(0, \eta^2)\]
\[\theta_i \sim \mbox{Truncated-Normal}(\nu, \tau^2)\] \[\nu \sim \mbox{Truncated-Normal}(0, \eta^2)\]
\[\theta_i \sim \mbox{Normal}(\nu, \tau^2)\] \[\nu \sim \mbox{Normal}(0, \eta^2)\]
In worksheet 3. :)
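The core idea behind computing the Bayes factor for such an order constraint (the encompassing approach) is that it equals the ratio of the posterior to the prior probability that all constraints hold. A toy sketch with made-up draws (quid computes these probabilities from the real model):

```r
# encompassing approach (toy draws; individuals' effects are illustrative)
set.seed(7)
n <- 10000
prior_theta <- matrix(rcauchy(n * 3, scale = 0.5), ncol = 3)         # prior draws, 3 individuals
post_theta  <- matrix(rnorm(n * 3, mean = 0.5, sd = 0.3), ncol = 3)  # pretend posterior draws
p_prior <- mean(rowSums(prior_theta > 0) == 3)  # P(all theta_i > 0) a priori
p_post  <- mean(rowSums(post_theta  > 0) == 3)  # P(all theta_i > 0 | data)
p_post / p_prior   # Bayes factor, constrained vs. unconstrained
```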
whichConstraint specifies the ordinal constraints tested.

Rouder, Lu, Speckman, Sun, & Jiang (2005)
Install and load quid.
# load libraries
library(devtools)
install_github("lukasklima/quid")
library(quid)
## 
## Attaching package: 'quid'

## The following object is masked _by_ '.GlobalEnv':
## 
##     stroop
Documentation is still under development…
# check what arguments the function takes
args(quid:::constraintBF)
## function (formula, data, whichRandom, ID, whichConstraint, rscaleEffects, 
##     iterationsPosterior = 10000, iterationsPrior = iterationsPosterior * 
##         10, burnin = 1000, ...) 
## NULL
## LD5
# inspect data
str(ld5)
## 'data.frame':	17031 obs. of  9 variables:
##  $ sub     : Factor w/ 52 levels "0","1","2","3",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ block   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ trial   : int  20 21 22 23 24 26 27 28 29 30 ...
##  $ stim    : int  5 1 3 4 1 5 0 4 4 0 ...
##  $ resp    : int  1 0 1 1 0 1 0 1 1 0 ...
##  $ rt      : int  470 476 507 603 493 535 463 431 569 509 ...
##  $ error   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ side    : Factor w/ 2 levels "1","2": 2 1 2 2 1 2 1 2 2 1 ...
##  $ distance: Factor w/ 3 levels "1","2","3": 3 2 1 2 2 3 3 2 2 3 ...
?ld5
# analysis
resLD5 <- quid:::constraintBF(formula = rt ~ sub * distance + side,
                              data = ld5,
                              whichRandom = "sub",
                              ID = "sub",
                              whichConstraint = c(distance = "1 > 2", distance = "2 > 3"),
                              rscaleEffects = c("sub" = 1,
                                                "side" = 1/6,
                                                "distance" = 1/6,
                                                "sub:distance" = 1/10))
BF for the analog representation model:
resLD5
## 
## Bayes factor analysis
## --------------
## [1] sub                                  : 4.36765e+1044  ±0.01%
## [2] distance                             : 1.523779e+41   ±0.01%
## [3] sub + distance                       : 1.439468e+1100 ±0.7%
## [4] sub + distance + sub:distance        : 1.340429e+1100 ±0.66%
## [5] side                                 : 0.052699       ±0%
## [6] sub + side                           : 2.340764e+1043 ±1.62%
## [7] distance + side                      : 7.511926e+39   ±0.93%
## [8] sub + distance + side                : 7.387908e+1098 ±1.35%
## [9] sub + distance + sub:distance + side : 7.337836e+1098 ±2.13%
## 
## Against denominator:
##   Intercept only 
## ---
## Bayes factor type: BFlinearModel, JZS
## 
## =========================
## 
## Constraints analysis
## --------------
## Bayes factor          : 23.73737
## Posterior probability : 0.05222222
## Prior probability     : 0.0022
## 
## Constraints defined:
##   distance : 2 < 1
##   distance : 3 < 2
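Note that the constraints Bayes factor in this output is simply the posterior probability of the constraint divided by its prior probability:

```r
# constraints BF = posterior probability / prior probability (values from the output above)
0.05222222 / 0.0022
## [1] 23.73737
```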
BF for the propositional representation model:
# log BFs: model [6] sub + side (propositional) and model [9] full model (unconstrained)
bf_p0 <- resLD5@generalTestObj[6]@bayesFactor[1]
bf_u0 <- resLD5@generalTestObj[9]@bayesFactor[1]
exp(bf_p0 - bf_u0)
##                      bf
## sub + side 3.189993e-56
Comparing analog to propositional model:
resLD5@constraints@bayesFactor / exp(bf_p0 - bf_u0)
##                    bf
## sub + side 7.4412e+56
Check out worksheet 3 on github!
Efron, B., & Morris, C. (1977). Stein’s paradox in statistics. Scientific American, 236(5), 119–127.
Gelman, A. (2005). Analysis of variance: Why it is more important than ever. The Annals of Statistics, 33(1), 1–53.
Haaf, J. M. (2018). A hierarchical Bayesian analysis of multiple order constraints in behavioral science (PhD thesis). University of Missouri.
Haaf, J. M., & Rouder, J. N. (2017). Developing constraint in Bayesian mixed models. Psychological Methods, 22(4), 779–798.
Rouder, J. N., Lu, J., Speckman, P. L., Sun, D., & Jiang, Y. (2005). A hierarchical model for estimating response time distributions. Psychonomic Bulletin & Review, 12, 195–223.
Von Bastian, C. C., Souza, A. S., & Gade, M. (2015). No evidence for bilingual cognitive advantages: A test of four hypotheses. Journal of Experimental Psychology: General, 145(2), 246–258.